security control
Automated Reasoning for Vulnerability Management by Design
Securing a system requires managing its vulnerability posture and designing appropriate security controls. Vulnerability management makes it possible to address vulnerabilities proactively by incorporating pertinent security controls into system designs. Current vulnerability management approaches, however, do not support systematic reasoning about the vulnerability posture of a system design. To effectively manage vulnerabilities and design security controls, we propose a formally grounded automated reasoning mechanism. We integrate the mechanism into an open-source security design tool and demonstrate its application through an illustrative example driven by real-world challenges. The mechanism allows system designers to identify the vulnerabilities applicable to a specific system design, explicitly specify vulnerability mitigation options, declare selected controls, and thus systematically manage vulnerability postures.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > France > Occitanie (0.04)
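The abstract stays at the conceptual level, so as a rough illustration only, here is a minimal Python sketch of design-time vulnerability reasoning: a catalog of vulnerabilities with applicability conditions and known mitigations, checked against a design's components and declared controls. All identifiers are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model of design-time vulnerability reasoning:
# a vulnerability applies when the design uses one of its affected
# component types, and it counts as managed once a declared control
# is among its known mitigations.

@dataclass
class Vulnerability:
    vuln_id: str
    affects: set[str]      # component types this vulnerability applies to
    mitigations: set[str]  # control identifiers that address it

@dataclass
class SystemDesign:
    components: set[str]
    declared_controls: set[str] = field(default_factory=set)

def applicable(catalog: list[Vulnerability], design: SystemDesign) -> list[Vulnerability]:
    return [v for v in catalog if v.affects & design.components]

def unmanaged(catalog: list[Vulnerability], design: SystemDesign) -> list[str]:
    return [v.vuln_id for v in applicable(catalog, design)
            if not (v.mitigations & design.declared_controls)]

catalog = [Vulnerability("V-SQLI", {"web-form"}, {"input-validation"}),
           Vulnerability("V-MITM", {"http-endpoint"}, {"tls"})]
design = SystemDesign({"web-form", "http-endpoint"}, {"tls"})
print(unmanaged(catalog, design))  # ['V-SQLI'] -> still needs a control
```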
Securing Agentic AI: A Comprehensive Threat Model and Mitigation Framework for Generative AI Agents
Narajala, Vineeth Sai, Narayan, Om
As generative AI (GenAI) agents become more common in enterprise settings, they introduce security challenges that differ significantly from those posed by traditional systems. These agents are not just LLMs: they reason, remember, and act, often with minimal human oversight. This paper introduces a comprehensive threat model tailored specifically to GenAI agents, focusing on how their autonomy, persistent memory access, complex reasoning, and tool integration create novel risks. The work identifies nine primary threats and organizes them across five key domains: cognitive architecture vulnerabilities, temporal persistence threats, operational execution vulnerabilities, trust boundary violations, and governance circumvention. These threats are not merely theoretical; they bring practical challenges such as delayed exploitability, cross-system propagation, cross-system lateral movement, and subtle goal misalignments that are hard to detect with existing frameworks and standard approaches. To help address this, the work presents two complementary frameworks: ATFAA (Advanced Threat Framework for Autonomous AI Agents), which organizes agent-specific risks, and SHIELD, a framework proposing practical mitigation strategies designed to reduce enterprise exposure. While this work builds on existing work in LLM and AI security, the focus is squarely on what makes agents different, and why those differences matter. Ultimately, this research argues that GenAI agents require a new lens for security: if we fail to adapt our threat models and defenses to account for their unique architecture and behavior, we risk turning a powerful new tool into a serious enterprise liability.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.54)
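For orientation, the five domains named in the abstract can be modeled as a simple lookup. The example entries below are hypothetical placeholders (only "cross-system lateral movement" appears in the abstract) and are not the paper's nine threats.

```python
# Illustrative only: the five domain names come from the abstract; the
# example threats under each are invented placeholders for the sketch.
THREAT_DOMAINS: dict[str, list[str]] = {
    "cognitive architecture vulnerabilities": ["e.g. reasoning-chain manipulation"],
    "temporal persistence threats": ["e.g. poisoned long-term memory"],
    "operational execution vulnerabilities": ["e.g. unsafe tool invocation"],
    "trust boundary violations": ["cross-system lateral movement"],
    "governance circumvention": ["e.g. oversight evasion"],
}

def domain_of(threat: str) -> str | None:
    # Toy lookup: map a threat description back to its domain.
    for domain, examples in THREAT_DOMAINS.items():
        if threat in examples:
            return domain
    return None

print(domain_of("cross-system lateral movement"))  # trust boundary violations
```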
Enhancing Security Control Production With Generative AI
Ling, Chen, Ghashami, Mina, Gao, Vianne, Torkamani, Ali, Vaulin, Ruslan, Mangam, Nivedita, Jain, Bhavya, Diwan, Farhan, SS, Malini, Cheng, Mingrui, Kumar, Shreya Tarur, Candelario, Felix
Security controls are mechanisms or policies designed for cloud-based services to reduce risk, protect information, and ensure compliance with security regulations. Their development is traditionally a labor-intensive and time-consuming process. This paper explores the use of Generative AI to accelerate the generation of security controls. We specifically focus on generating Gherkin code, a domain-specific language used to define the behavior of security controls in a structured and understandable format. By leveraging large language models and in-context learning, we propose a structured framework that reduces the time required to develop a security control from 2-3 days to less than one minute. Our approach integrates detailed task descriptions, step-by-step instructions, and retrieval-augmented generation to enhance the accuracy and efficiency of the generated Gherkin code. Initial evaluations on AWS cloud services demonstrate promising results, indicating that GenAI can effectively streamline the security control development process, thus providing a robust and dynamic safeguard for cloud-based infrastructures.
- Research Report (0.64)
- Workflow (0.50)
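The abstract names the ingredients (task description, step-by-step instructions, retrieval-augmented in-context examples) without showing them. A hedged Python sketch of that prompt assembly might look like the following; the helper names and the Gherkin sample are stand-ins, not an API or example from the paper.

```python
# Hypothetical sketch of the kind of pipeline the abstract describes:
# retrieve similar existing controls, then prompt an LLM with a task
# description, step-by-step instructions, and the retrieved examples.

def build_prompt(requirement: str, examples: list[str]) -> str:
    instructions = (
        "You write Gherkin scenarios that define the behavior of a "
        "cloud security control. Follow the Given/When/Then structure."
    )
    shots = "\n\n".join(examples)  # retrieval-augmented in-context examples
    return f"{instructions}\n\n{shots}\n\nRequirement: {requirement}\nGherkin:"

# An invented example of the target output format (Gherkin as a string):
example_control = """\
Feature: S3 bucket public access control
  Scenario: Block public read access
    Given an S3 bucket in the account
    When the bucket policy allows public read
    Then the control flags the bucket as non-compliant
"""

prompt = build_prompt(
    "EBS volumes must be encrypted at rest",
    [example_control],
)
# generated = llm_complete(prompt)  # plug in your model of choice here
```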
Security of and by Generative AI platforms
Hayagreevan, Hari, Khamaru, Souvik
This whitepaper highlights the dual importance of securing generative AI (genAI) platforms and leveraging genAI for cybersecurity. As genAI technologies proliferate, their misuse poses significant risks, including data breaches, model tampering, and malicious content generation. Securing these platforms is critical to protect sensitive data, ensure model integrity, and prevent adversarial attacks. Simultaneously, genAI presents opportunities for enhancing security by automating threat detection, vulnerability analysis, and incident response. The whitepaper explores strategies for robust security frameworks around genAI systems, while also showcasing how genAI can empower organizations to anticipate, detect, and mitigate sophisticated cyber threats.
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.39)
Large Language Models for Code: Security Hardening and Adversarial Testing
Large language models (large LMs) are increasingly trained on massive codebases and used to generate code. However, LMs lack awareness of security and frequently produce unsafe code. This work studies the security of LMs along two important axes: (i) security hardening, which aims to enhance LMs' reliability in generating secure code, and (ii) adversarial testing, which seeks to evaluate LMs' security from an adversarial standpoint. We address both by formulating a new security task called controlled code generation. The task is parametric and takes as input a binary property to guide the LM to generate secure or unsafe code, while preserving the LM's capability of generating functionally correct code. We propose a novel learning-based approach called SVEN to solve this task. SVEN leverages property-specific continuous vectors to guide program generation towards the given property, without modifying the LM's weights. Our training procedure optimizes these continuous vectors by enforcing specialized loss terms on different regions of code, using a high-quality dataset we carefully curated. Our extensive evaluation shows that SVEN is highly effective in achieving strong security control. For instance, a state-of-the-art CodeGen LM with 2.7B parameters generates secure code 59.1% of the time; when we employ SVEN for security hardening (or adversarial testing), that ratio is significantly boosted to 92.3% (or degraded to 36.8%). Importantly, SVEN closely matches the original LMs in functional correctness.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Europe > Denmark > Capital Region > Copenhagen (0.05)
- North America > United States > New York > New York County > New York City (0.04)
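SVEN's exact parameterization is in the paper; in the spirit of its description (property-specific continuous vectors steering a frozen LM), here is a simplified prefix-style sketch in PyTorch. Sizes, names, and the prepend-to-embeddings placement are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PropertyPrefix(nn.Module):
    """Simplified sketch: one trainable continuous prefix per binary
    property ("secure" / "unsafe"), prepended to the token embeddings
    of a frozen LM. SVEN's actual placement and training may differ."""
    def __init__(self, n_prefix_tokens: int, d_model: int) -> None:
        super().__init__()
        self.prefix = nn.ParameterDict({
            "secure": nn.Parameter(0.02 * torch.randn(n_prefix_tokens, d_model)),
            "unsafe": nn.Parameter(0.02 * torch.randn(n_prefix_tokens, d_model)),
        })

    def forward(self, token_embeddings: torch.Tensor, prop: str) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model). The LM weights stay
        # frozen; only these prefix vectors receive gradients in training.
        batch = token_embeddings.size(0)
        prefix = self.prefix[prop].unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeddings], dim=1)

# Usage: embed tokens with the frozen LM's embedding layer, then condition
# generation by choosing prop="secure" (hardening) or prop="unsafe" (testing).
```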
Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Kawamoto, Yusuke, Miyake, Kazumasa, Konishi, Koichi, Oiwa, Yutaka
In this article, we propose the Artificial Intelligence Security Taxonomy to systematize the knowledge of threats, vulnerabilities, and security controls of machine-learning-based (ML-based) systems. We first classify the damage caused by attacks against ML-based systems, define ML-specific security, and discuss its characteristics. Next, we enumerate all relevant assets and stakeholders and provide a general taxonomy for ML-specific threats. Then, we collect a wide range of security controls against ML-specific threats through an extensive review of recent literature. Finally, we classify the vulnerabilities and controls of an ML-based system in terms of each vulnerable asset in the system's entire lifecycle.
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > California > Los Angeles County > Long Beach (0.14)
- Europe > Austria > Vienna (0.14)
- (42 more...)
- Research Report (1.00)
- Overview (1.00)
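The survey's organizing idea, classifying vulnerabilities and controls per vulnerable asset across the system lifecycle, maps naturally onto a record type. The Python schema below is one illustrative reading; the field values are invented for the example, not taken from the article.

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the taxonomy's structure: each vulnerable
# asset at a lifecycle stage carries its threats and candidate controls.

@dataclass(frozen=True)
class AssetSecurityProfile:
    asset: str            # e.g. "training data", "model parameters"
    lifecycle_stage: str  # e.g. "data collection", "training", "deployment"
    threats: tuple[str, ...]
    controls: tuple[str, ...]

profiles = [
    AssetSecurityProfile(
        asset="training data",
        lifecycle_stage="data collection",
        threats=("data poisoning",),
        controls=("provenance checks", "outlier filtering"),
    ),
]
```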
Aruba rolls out new AIOps capabilities
Network modernization is a key component of digital transformation initiatives for organizations looking to achieve better business outcomes. With that in mind, Aruba has announced its new Aruba Edge Services Platform with AIOps capabilities designed to reduce the time IT professionals spend on manual tasks such as network troubleshooting, performance tuning and Zero Trust/SASE security enforcement. As part of Aruba's growing family of AIOps solutions, these new capabilities aim to supplement overtaxed IT teams as they grapple with increasing network complexity and the rapid growth of IoT. For the first time, AIOps can be utilized for not just network troubleshooting but also performance optimization and critical security controls, Aruba said. With the growth of hybrid work, new user engagement models and challenges resulting from the Great Resignation and widening skills gaps, IT teams must find ways to achieve greater efficiencies and do away with time-intensive manual processes, the company said.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.51)
- Information Technology > Data Science > Data Mining > Big Data (0.33)
The dilemma of Defense in Depth
The defense in depth strategy has proven its effectiveness in preventing cyber threats over the years. At an abstract level, most security controls are designed with two main components: 1) a knowledge base, and 2) a matching engine. Each security product has its own version of a growing knowledge base of feeds (whatever those feeds are). The content of these knowledge bases, and how frequently they are updated, is often the basis of competition between vendors. In this context, where the knowledge bases are complementary, defense in depth is meaningful.
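As a toy illustration of that two-component shape (not any vendor's implementation), a knowledge base of indicator feeds plus a matching engine can be sketched in a few lines of Python:

```python
# Minimal sketch of the pattern described above: a knowledge base of
# feeds (here, plain indicator strings) plus a matching engine that
# checks events against it. Purely illustrative.

class SecurityControl:
    def __init__(self) -> None:
        self.knowledge_base: set[str] = set()

    def update_feed(self, indicators: list[str]) -> None:
        # Vendors compete on what goes here and how often it refreshes.
        self.knowledge_base.update(indicators)

    def match(self, event: str) -> bool:
        # Matching engine: flag any event containing a known indicator.
        return any(ind in event for ind in self.knowledge_base)

edr = SecurityControl()
edr.update_feed(["evil.example.com", "bad-hash-123"])
print(edr.match("dns query to evil.example.com"))  # True
```

Defense in depth then amounts to stacking several such controls whose knowledge bases overlap only partially, so that what one misses another may catch.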
Cultivating trust in AI
Trust is vital to economics, society, and sustainable development. That's equally true when it comes to artificial intelligence. To develop trusted AI, security should be an integral part of your AI development lifecycle. With every technology paradigm shift, attackers are there to exploit new capabilities. In response, cyber teams' defense patterns have also evolved.
- Information Technology > Security & Privacy (1.00)
- Government > Military (0.73)
Why is Cybersecurity Failing Against Ransomware?
Yes, security is hard; no one is ever 100 percent safe from the threats lurking out there. But how is it that, time and time again, companies, even big companies, keep falling for ransomware attacks? Let's explore the main reasons why, starting with some basics before getting more in-depth: Two-factor authentication (2FA) is probably the easiest security improvement an organization can implement, and it's one of the solutions most advocated by infosec professionals. Despite this, we continue to see breaches like Colonial Pipeline occur because organizations have either failed to implement 2FA or failed to *fully* implement it. Anything that requires a username and password to access should have 2FA enabled.
- North America > United States (0.48)
- Asia > Russia (0.29)
- Europe > Russia (0.14)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.48)
- Government > Military > Cyberwarfare (0.40)
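Since the post singles out 2FA as the easiest win, here is a minimal standard-library Python sketch of the TOTP codes (RFC 6238) behind most authenticator apps; the secret below is a made-up example, and a real deployment would use a vetted library and server-side verification.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```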